Linking the Verification of Compilable Code to the Application Content of Computer Based Systems

Author

  • Egon Börger
Abstract

We explain why, for the program verifier challenge proposed in [58] to gain practical impact, one needs to include rigorous definitions and analysis, prior to code development and comprising both experimental validation and mathematical verification, of ground models, i.e. blueprints that describe the application content of programs. This implies the need to link the relevant properties of such high-level models in a traceable and checkable way to what a compiler can verify. We outline the Abstract State Machines (ASM) method, which allows one to bridge the large gap between informal requirements and executable code by combining application-centric, experimentally validatable system modeling with mathematically verifiable detailing of abstract models into compile-time-verifiable code.

[Footnote 1: Submitted. Draft of March 28, 2006. Criticism is welcome.]

By its definition in [58], the program verifier challenge is focussed on the correctness of programs: software representations of computer-based systems, to be compiled by the verifying compiler. As a consequence, "the criterion of correctness is specified by types, assertions and other redundant annotations associated with the code of the program", where "the compiler will work in combination with other program development and testing tools, to achieve any desired degree of confidence in the structural soundness of the system and the total correctness of its more critical components." However, compilable code for complex systems is the result of two program development activities, which before reaching the final executable code yield abstract models that also have to be checked for correctness:
– turning the requirements into ground models. By ground models I understand accurate "blueprints" of the to-be-implemented piece of "real world", which define the application-centric meaning of programs in an abstract and precise form, not only prior to coding, but also at a level of detailing that is
  • higher than that of compilable code,
  • determined by the application problem,
  • formulated in terms of the application domain;
– linking ground models to compilable code by a series of refinements, which introduce step by step the details resulting from the design decisions for the implementation.

We argue in this paper that a practically relevant verified software project has to be grounded in reality by relating the verification of the correctness of compilable programs
– to the experimental validation of the application-domain-based semantical correctness of ground models (Section 1) and
– to the mathematical verification of their refinements to compilable code (Section 2).
We also show (Section 3) that the Abstract State Machine (ASM) method, which comes with an accurate notion of ground models [19] and a sufficiently general notion of ASM refinements [20] that scales to systems of industrial size, can establish such a methodical link between problems and their solutions by compilable code. This leads us to formulate a number of research challenges and milestones that we propose to become part of the verified software endeavor (Section 4).

1 ASM Ground Models (System Blueprints): A Semantical Foundation for Program Verification

[Footnote 2: A preliminary form of the arguments presented in this section appeared in [19].]

One of the main points where we believe the program verifier challenge as formulated in [58] has to be extended to become of practical interest concerns the apparently implicit assumption that compilable programs constitute the true definition of the system they represent. The assumption expresses a widespread belief.
However, in complex applications compilable code rarely "grounds the design in reality". This holds epistemologically speaking—compilable programs typically provide no correspondence between the extra-logical theoretical terms appearing in the code and their empirical interpretation, as requested by a basic principle of Carnap's analysis of scientific theories [33]—but also from a practical viewpoint, since a definition consisting merely of tens or hundreds of thousands of lines of code is not in the range of things a human mind can understand and control reliably. This fact is well known from numerous famous system breakdowns and from the typically ad-hoc character of the fixes, which often lack a deep understanding of the system or of the real causes of the failure and therefore cannot guarantee that the next breakdown will not occur soon. The fact is also at the basis of the concern expressed in [10] that a verifying software project should not be focussed "on the analysis of artifacts (programs) rather than on their design and construction", since we cannot expect verification tools to inject high reliability into a program that was not designed with reliability in mind from the beginning.

We must think about reliability at every point in the software production process. If the starting point for verification is that we are given a program and must attempt to verify it, we are in a losing position, because we have so little leverage to affect the design of that program.

Defining what the software for a computer-based system is supposed to do takes place during the requirements engineering phase, during which a correct understanding-by-humans of the system-to-be-built has to be achieved. Code is the machine-managed (read: executable) representation of what Brooks [32] calls "the conceptual construct" or the "essence" of the software system. The definition of this "conceptual construct" precedes the development of the code that implements the definition. We explain below why and how what we call "ground models" can define this "conceptual construct" of software systems prior to their implementation by compilable code. Before doing this we shortly characterize in the next subsection three basic problems ground models have to solve.

[Footnote 3: We adopt the widespread use of this bombastic term to denote requirements capture, analysis and documentation.]

We also remark that, in addition to checking that the ground model "grounds the design in reality", the implementation relation between ground models and compilable code makes it mandatory to also check that the transformation of ground models into code (read: by stepwise detailing, also called refinement) preserves this application-centric correctness, as will be discussed in Section 2. The preservation of correctness through refinements helps to solve a problem that is hardly tackled if code is taken as the system definition, namely to faithfully reflect changing requirements and to document them in a transparent way. We discuss below that checking the correctness of the refinement relation between ground models (read: an accurate form of the requirements) and code makes changing requirements reliably traceable down to executable code. We conclude the section with an illustration of using ground models for Java/JVM interpreters and for a Java-to-JVM compiler to provide a framework that guarantees at compile time that the generated bytecode will pass the verifier.
1.1 Three problems for requirements capture by ground models

The notoriously difficult and error-prone elicitation of requirements is largely a formalization task, in the sense of an accurate task formulation. It has to realize the transition from usually natural-language problem descriptions to a sufficiently exact, unambiguous, consistent, complete and minimal formulation of "precisely what to build" [32], namely by a ground model that represents the algorithmic content of the software contract. Such a formalization task necessitates the solution of three main problems by ground models, namely concerning communication, verification and validation.

Communication. First of all, ground models must be apt to mediate between the application domain, where the task to be accomplished by the system to be built originates, and the world of models, where the relevant piece of reality has to be represented. This is mainly a language and communication problem between the software designers and the domain experts or customers—in a multi-disciplinary project they will come from completely different disciplines, see for example the DIRC project reported in [60]—the parties who prior to coding have to come to a common understanding of "what to build", to be documented in a contract containing a model which can be inspected by the involved parties. The language in which the ground model is formulated must be appropriate to express the relevant features of the given application domain naturally yet accurately, and it must be easily understandable by the two parties. This includes the capability to calibrate the degree of precision of the language to the given problem, so as to support concentration on domain issues instead of issues of notation. It also means that the modeling language should come with a general (conceptual and application-oriented) data model, together with a general function model (for a process-oriented definition of the system dynamics) and a general interface concept to represent system environments (consisting of the system users and of neighboring systems or applications) and state-based system behavior.

The communication problem is not restricted to the requirements engineering phase and the parties involved there. It also appears where different groups, possibly working at different places, or multiple members of a large group, have to cooperate on the construction of one software system: designers, programmers, testers, maintenance experts, etc. It is crucial for a realistic verified-programs project to work with an open yet coherent and accurate conceptual framework that is simple and general enough to solve this communication problem. A restriction to the language of high-level programming languages does not solve the communication problem, as is well explained in [5].

Verification. The second formalization problem is a verification-method problem. It is of an epistemological nature and stems from the fact that there are no mathematical means to prove the correctness of the transition from an informal to a precise description.
Every chain of models which formalizes given requirements, and which comes for each model with a mathematical correctness proof with respect to its predecessor, must end with one primary model, which can be related to the requirements only in a direct way, trying to reach by inspection some kind of evidence of the desired correspondence between the model and the reality the model is supposed to capture. This is analogous to Aristotle's observation in the Analytica Posteriora that to provide a foundation for a scientific theory no infinite regress is possible and that the first one of every chain of theories has to be justified by "evident" axioms. Such an "evidence" of correctness is what ground model inspection has to provide.

[Footnote 4: For this reason ground models [17] were originally called primary models [16, Sect. 3].]

[Footnote 5: Certainly the epistemological status of the underlying concept of evidence has to be clarified. See for example Carnap's confirmation theory or the discussion on the role of axioms in science, e.g. in the controversy between Frege, who held a "platonistic" view, and Hilbert, who held a "formalistic" position, on the role of axioms for a foundation of mathematical theories, see [11].]

Two kinds of means are needed to establish that a ground model is complete and consistent, that it reflects the original intentions and that these are correctly conveyed—together with all the necessary underlying application-domain knowledge—to the designer. To check the completeness property, which is clarified further below, it must be possible to proceed via inspection of ground models by the application-domain expert. But also appropriate forms of domain-specific reasoning, not limited to formal deductions in a priori determined logic systems, have to be available to support the designer in formally checking the internal consistency of the model, as well as the consistency of different system views. Such view consistency often is the result of an involved and complex process of resolving conflicting objectives in the original requirements. We believe that these two complementary forms of ground model verification are crucial for a realistic requirements-capture method, though in practice reasoning-based checking of ground model properties often is of less importance than concept-focussed model inspection (see, e.g., [86, 52]). Having both forms of ground model verification provides a framework to extend the verified software project to what Holzmann calls "fail-proof systems" [59], i.e. reliable systems that may not come with zero-defect code and may be built from unreliable parts.

[Footnote 6: Providing a precise ground against which questions can be formulated, ground models support the Socratic method of asking "ignorant questions" [13] to check whether the semantic interpretation of the informal problem description is correctly captured by the mapping to the terms in the mathematical model.]

Validation. The third formalization problem is a validation problem. It must be possible to perform experiments with the ground model in which the behavior of the model can be observed under given conditions, in particular to simulate it for running relevant scenarios (use cases), providing a framework for
– systematic attempts to "falsify" the model in the Popperian sense [68] against the to-be-encoded piece of reality,
– runtime verification.
This empirical criterion also takes into account that computer-based systems are not purely intellectual artefacts but are inserted into a real-world environment, which lends itself more to testing methods than to mathematical verification techniques. Furthermore, use cases often are part of the requirements and are thus directly reflectable through simulations. In case an entire system is conceived as defined by executable specifications of use cases (see for example [54]), this is captured by the corresponding run segments (simulations) in the ground model.
It is an important technical side-effect that simulations also allow one to define—prior to coding—a precise system-acceptance test plan and thus to use a ground model in two roles: (1) as an accurate requirements specification (to be matched by the application-domain expert against the given requirements) and (2) as a test model (to be matched by the tester against executions of the final code), where we consider environmental conditions as part of the requirements. These two roles support the combination of runtime verification with automatic test generation of the type proposed in [9] and, in general, with model-based testing [84]. See also the discussion of ASM ground models below.

1.2 Notion of Ground Models

By its epistemological role of relating some piece of "reality" to a linguistic description, the concept of ground model has no purely mathematical definition, though it can be given a scientific definition in terms of basic epistemological concepts which have been elaborated for the empirical sciences by analytic philosophers, see for example [50, 51]. To be appropriate as high-level models for complex real-life systems, also under industrial constraints, ground models have to possess the following essential properties, which characterize the notion. Ground models must be:

– precise at the appropriate level of detailing yet flexible, to satisfy the required accuracy exactly, without adding unnecessary precision;
– simple and concise, to be understandable and acceptable as a contract by the two parties involved, domain experts and system designers. As we will argue below, ASM ground models allow one to achieve this property mainly by avoiding any extraneous encoding and by reflecting "directly", through the abstractions, the structure of the real-world problem. This makes ground models manageable for inspection and analysis, helps designers to resolve the "lack of scientific understanding on the part of their customers (and themselves)" [58, p.66] and enables experts to "clearly explain why . . . systems indeed work correctly" [5];
– abstract (minimal) yet complete.
  • Completeness means that every semantically relevant feature is present, that all contract benefits and obligations are mentioned and that there are no hidden clauses. In particular, a ground model must contain as interface all semantically relevant parameters concerning the interaction with the environment, and where appropriate also the basic architectural system structure. The completeness property "forces" the requirements engineer, as much as this is possible, to produce a model which is "closed" modulo some "holes", which are however explicitly delineated, including a statement of the assumptions made for them at the abstract level. How such assumptions will be realized depends on the particular case: for external devices it is the role of the devices to guarantee the assumptions; for internal software components the assumptions have to be guaranteed through the detailed specification via subsequent refinements.
Model "closure" implies that no gap in the understanding of "what to build" is left, that every relevant portion of implicit domain knowledge has been made explicit and that there is no missing requirement—thereby avoiding a typical type of software error that is hard to detect at the level of compilable code [69, Fact 25].
[Footnote 7: A frequent case of such "holes" is represented by external technical devices, which interact via sensors and actuators with the software to be built to control them. Here the role of ground models is to define the behaviour of the whole system, as it is supposed to happen in the real world; the specification of the software control system can be extracted from the ground model. See [6, 12] for an example.]
  • Minimality means that the model abstracts from details that are relevant either only for the further design or only for a portion of the application domain which does not influence the system to be built;
– validatable (see [56]) and thus in principle falsifiable by experiment (executing the ground model on characteristic instances) and by rigorous analysis, satisfying the basic Popperian criterion for scientific models [68];
– equipped with a simple yet precise semantical foundation as a prerequisite for rigorous analysis and reliable tool support.

Thus software system ground models can be defined as "blueprints" of the to-be-implemented piece of "real world" which "ground the system design in reality". Obviously there can be a multiplicity of different ground models for one system, since there are usually many ways to abstract from merely implementation-relevant details. Also changing requirements can yield different ground models, see the explanations below. This reflects the intrinsic creativity of defining ground models, an activity which can never be completely mechanized, although one can learn many rules of thumb. In the case of a reengineering project it can also happen that the code is the ground model, from which a high-level model is to be abstracted—maybe to be shown to be at least in part correctly refined by the existing code—before refining the abstract model to the new code.

Language Conditions for Defining Ground Models. Unfortunately it is still strongly debated what kind of language is suited to express ground models and which methods are appropriate for their analysis. To satisfy the ground model properties listed above and to serve as a basis for a practical program verification project, the ground model language can neither be confined to the syntax of some particular logic, nor can the means of analysis be restricted to a-priori-fixed rule-based (a fortiori if mechanized) reasoning schemes, contrary to what some formulations in [58] seem to suggest and also contrary to the view held in [66] that "the "verification problem" is the theorem proving problem, restricted to a computational logic". Such a restriction would not solve the communication problem, since the thorough training in mathematical logic it requires goes beyond the expertise that can reasonably be expected from software practitioners or domain experts. In addition, the purely declarative, non-executable character which is intrinsic to logical, purely axiomatic system descriptions does not solve the validation problem.
In fact, most of the successful formal method tools, e.g. model checkers or theorem provers, are used for the verification of internal properties of accurate models, or of the refinements which relate accurate models, much more than to formulate ground models and to relate them to the encoded piece of reality; see for example the successful practical applications of the B-method [2, 1].

To understand what is needed, the pragmatic approach of the applied mathematical sciences can help, where each time the degree of rigor (read: of formalization or of detailing) used for definitions and proofs is chosen to suit the problem under study. Here this means that we need a framework that supports intuitive, content-oriented, precise modeling and reasoning, the way domain experts use it for high-level process-oriented system requirement descriptions and software practitioners for their work with pseudo-code.

[Footnote 9: This pragmatic scientific attitude is in contrast with the widely held belief that "the central concepts in software verification are program code and formal proofs" [76], a view that seems to underlie also the program verification project formulation in [58].]

The need to be able to tailor ground models to resemble the structure and to reflect the degree of detail of the targeted real-world problem implies that the language used must offer broad-spectrum data and process-control features: the ability to speak directly about arbitrary objects, their properties, their relations with other objects, and the operations one can perform on them under specific conditions. The well-known mathematical concept of structure reflects this general concept of not necessarily implementable data types, whereas the computational aspect of changing data values is naturally expressed using dynamic-change expressions (rules) of the form

  if Event then Actions

as used in describing behavioral processes as well as processes of thought. Technically, rules of this form are omnipresent in scientific descriptions and occur in particular in UML state or activity diagrams, which are built from branching (condition-checking) nodes and action nodes. For the sake of generality, the Events have to be allowed to express any static or dynamic, process-internal or environmental properties or relations among the relevant objects. Actions should be usable to describe any dynamic (typically local) state change using any of the underlying internal or external operations. In order not to miss the generality and simplicity needed for the ground model language, it is important not to divorce the declarative expression of events (conditions) from the operational character of state-changing computational actions.

[Footnote 8: This is exactly the opposite of the view taken in some purely logico-axiomatic approaches, as advocated for example in [53, p.89], where it is explained that "the most important characteristic of Z, which singles it out from every other formal method, is that it is completely independent of any idea of computation."]

Using ASMs for Defining Ground Models. The language of Abstract State Machines (ASMs), a natural extension of the language of Finite State Machines (FSMs) to machines working over arbitrary structures (see [21] and the original definition in [47]), defines ASMs to be given as sets of rules of the above form.
– Events are instantiated as arbitrary conditions (of the underlying signature). This generalizes the firing condition ctl_state = i ∧ input = a of FSM transitions, which requires the FSM to be in a particular internal (control) state i upon reading a particular input (symbol) a.
– Actions are instantiated as sets of Updates of arbitrary memory locations f, which are allowed to be parameterized by parameters a1, …, an of whatever type. Also the new values are allowed to be of whatever type.
This generalizes the effect of FSM instructions, which update two locations, to yield an output value and a change of the internal state ctl_state. Memory locations and their values are described by arbitrary terms t, ti (not only (numbers for) internal states and input/output symbols, as for FSMs), built from the static and dynamic, internal or external operations that are present in the underlying structure, so that the updates take the form and the mathematical meaning of assignments f(t1, …, tn) := t. For more details see Section 3.

Thus ASMs constitute—by instantiating events and actions, without any loss of generality, to a mathematical concept—a conceptually simple mathematical framework for building ground models that satisfy all the properties listed above. In fact, the use of ASMs for ground models solves the language and communication problem due to the simple language needed to define ASMs, which uses only the fundamental if-then construct of human thought, understood by everybody who has learnt the general language of science, whether domain expert, system designer or programmer.

The use of ASMs for ground models also solves the verification problem, since it allows one to apply both inspection—for checking the model's correctness and completeness with respect to the problem to be solved—and reasoning to analyze its consistency, using whatever reasoning means are appropriate, ranging from intuitive considerations to formalized, mechanically checkable proofs. The notation used with ASMs does not limit the verification space. It is important for the practical success of the ASM method that it advocates a systematic separation of concerns, in particular separating design from verification and, within verification, different degrees of detailing of justification chains.

Last but not least, the validation problem is solved by the operational character of ground model ASMs, which come with a standard notion of process, computation or "run". Simulations of ground models are possible mentally, as typically used in proofs of run-time properties, or using various tools which make large classes of ASMs executable (see [30, Chapter 8]). In addition, the operational character of ASMs supports defining, in abstract run-time terms, the expected system effect on samples—the so-called oracle definition—which can be used for static testing, where the code is inspected and compared to the specification, but also for dynamic testing, where the execution results are compared. Furthermore, ASM ground models can be used to guide the user in the application-domain-driven selection of test cases, exhibiting in the specification the relevant environment parts and the properties to be checked, and showing how to derive (specify or generate) test cases from use cases. In particular, in this way one can integrate into the ASM method powerful verification techniques for automating test case generation, like model checking, SAT solvers and constraint satisfaction algorithms.
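To relate the rule format if Event then Actions described above to executable form, the following minimal Python sketch (not part of the paper; all names are illustrative) interprets a set of such rules: a state maps locations—a function name with an argument tuple—to values, each rule pairs a guard (the Event) with a function producing a set of updates (the Actions), and all updates collected in one step are applied simultaneously, generalizing an FSM instruction that only updates ctl_state and an output. A genuine ASM step additionally requires the collected update set to be consistent, which this sketch deliberately does not check.

```python
# Minimal sketch of a basic ASM step (illustrative only, not the paper's notation).
# A state maps locations -- pairs (function_name, argument_tuple) -- to values.
# A rule "if Event then Actions" is a guard over the state plus a function that
# yields a set of updates ((f, args), new_value). All enabled rules fire in the
# same step and their updates are applied simultaneously (consistency of the
# update set is assumed, not checked here).

from typing import Any, Callable, Dict, List, Set, Tuple

Location = Tuple[str, Tuple[Any, ...]]
State = Dict[Location, Any]
Update = Tuple[Location, Any]
Rule = Tuple[Callable[[State], bool], Callable[[State], Set[Update]]]


def asm_step(state: State, rules: List[Rule]) -> State:
    """Collect the updates of all enabled rules and apply them atomically."""
    updates: Set[Update] = set()
    for guard, actions in rules:
        if guard(state):              # Event: arbitrary condition on the state
            updates |= actions(state) # Actions: updates f(t1, ..., tn) := t
    new_state = dict(state)
    for location, value in updates:   # simultaneous application
        new_state[location] = value
    return new_state


# Usage: the FSM-like ground model rule
#   "if ctl_state = idle and input = start then ctl_state := running"
# written as an ASM rule over abstract locations.
rules: List[Rule] = [
    (lambda s: s[("ctl_state", ())] == "idle" and s[("input", ())] == "start",
     lambda s: {(("ctl_state", ()), "running")}),
]
state: State = {("ctl_state", ()): "idle", ("input", ()): "start"}
print(asm_step(state, rules)[("ctl_state", ())])  # -> "running"
```

Running such a step function on a scenario (use case) is one way to realize, on a small scale, the simulations and oracle-style checks of ground model runs mentioned above.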
By appropriately refining the oracle, one can also specify and implement a comparator, by determining, for runs of the ground model and of the code, which states of interest are to be related (spied), which locations of interest are to be watched, and when their comparison is considered successful (the test equivalence relation). These features for specifying a comparator, using the knowledge about how the oracle is refined, reflect the ingredients of the general notion of ASM refinement we discuss in Section 2.

In Section 3 we briefly survey some outstanding successful applications of ASM ground models for the design and analysis of complex systems. Before concluding this section we illustrate a use of ASM ground models for (a form of) bytecode verification. In [80] ASM ground models are developed for interpreters of Java and of the Java Virtual Machine (JVM), including a bytecode verifier, together with a high-level definition of a Java-to-JVM compiler. This compiler, which is proven to be correct for legal and well-typed programs, is refined to a form of certifying compiler by annotating the instructions issued by the compiler with type information that can be used to prove the typability of the generated code [80, Theorem 16.5.1]. This is a mathematically accurate form of Sun's off-device pre-verification (without inlining of subroutines) and guarantees at compile time that the generated code will pass the bytecode verifier. Apparently it still represents a challenge for current computer-based theorem proving systems to mechanize these bytecode verification proofs for bytecode compiled from Java programs (or C# programs, see [26, 42]).

[Footnote 10: For the formulation of concrete research challenges in this direction see [61].]

2 ASM Refinements: Management of Design Decisions, their Verification and Documentation

Typically there is more than one step to go from a ground model to compilable code: ". . . the specification of a large system is not a monolithic text but rather a succession of more and more precise mathematical models taking account gradually of the requirements of the future system" [5]. The characteristic phenomenon that occurs during this process, which eventually yields the definition of the compilable code, is known as the "explosion of 'derived requirements' (the requirements for a particular design solution), caused by the complexity of the solution process", encountered "when moving from requirements to design" [69, Fact 26]. The numerous and often orthogonal design decisions taken during this process have to be integrated into the link one has to establish between the ground model analysis and the verification of compilable code.

The question is how to link the ground model through the intermediate models to compilable code in such a way that the code verification by the compiler can be traced back to the validation or verification of the ground model, and vice versa. This is the role of the classical refinement method [88, 37]. The underlying refinement concept has been generalized to ASMs (for references see Section 3 and [20] for a recent survey) with the goal of supporting practical system validation and verification techniques that scale up to complex systems and make changing requirements traceable. Differently from most refinement concepts in the literature, ASM refinements are not necessarily syntax-directed but may concern different components which are all affected by some common feature, e.g. security.
Nevertheless, particular forms of refinement can also be defined which are compositional, for example analogues of the syntax-directed refinement notions of the B-method [2]. ASM refinements allow one to split the checking of complex detailed properties into a series of simpler checks of more abstract properties and of their correct refinement, following the path the designer has chosen to rigorously link, through various levels of abstraction, the system architect's view (at the abstraction level of a blueprint) to the programmer's view (at the level of detail of compilable code). In addition, successive ASM refinements provide a systematic code development documentation, which supports design reuse and code maintenance and includes behavioral information through state-based abstractions, thus leading to "further improvements to quality and functionality of the code . . . by good documentation of the internal interfaces" [58, p.66].

The practitioner can use ASM refinements in his daily work for reasons which were already mentioned in the previous section for ASM ground models: the transition from one model to a more refined one, or vice versa in the case of a reengineering project, can be fine-tuned to the new details one wants to introduce, without being hindered by any notational overhead. Using a chain of stepwise refined models enhances the designer's activity, in particular when it comes to reacting to so-called changing requirements. Having stepwise refined models at their disposal enables the designer and the system maintenance expert to localize exactly the "right" level of abstraction where a desired change has to be performed and from where it has to be transferred to the more detailed lower levels. This supports an explicit tracing of requirements changes from the ground model to code, in a particularly simple way when the changes are purely incremental, so that they can be captured by conservative model extensions. Purely incremental requirements changes give rise to multiple ground models, each one reflecting one set of requirements. "Freezing" a set of requirements in one model does not prevent changing that set and formalizing it by a refined ground model. This is the place where the ideas about "regression verification" proposed in [81] can be used.

A good refinement strategy aims in particular at encapsulating orthogonal features, which can be added incrementally to models in different ways. Therefore a sequence of successive changes down to executable code, triggered by changing a particular feature at a specific level of abstraction, does not produce extraneous additional work but is nothing other than introducing all the details which are needed anyway, however in fully documented single steps rather than in one blow. This makes it easier to understand the changed implementation details and to control their effect on the entire system. As observed in [5], collections of models at different refinement levels can also be exploited for an efficient composition of large systems, namely by selecting the models that are appropriate for the needed software system features, possibly adapting them by refining them further or implementing them in a particular programming language, with tools and theories that suit these features and facilitate their verification along the lines of the model composition.
An example from the area of programming languages is found in the incremental development of models for Java and the JVM in orthogonal layers (see [80]), and similarly for C# and the .NET CLR (see [26, 42]), supporting instruction-wise descriptions of individual programming constructs which one can put together as needed for a language definition; see also [31], where interpreters for Java and C# are derived by instantiating a general scheme for the interpretation of object-oriented language features. Such a component-wise system definition also supports verifiable definitions of the structure and the semantical content of managed code, moving away from the classical static compile-link-run model of language semantics to where meta-programming, generative programming and multi-stage programming are leading, namely to working with VM-based (interpreted or compiled) managed code and managed execution. Programs are composed and generated from separately definable code patterns, code fragments written in different languages and/or components, according to directives expressed through metadata, instantiating a general problem solution to particular cases of the problem.

We now shortly illustrate the definition of the ASM refinement scheme. In Section 3 we list some examples of refinement hierarchies.

ASM Refinement Scheme. In choosing how to refine an ASM M to an ASM M*, one has the freedom to define the following items, as illustrated by Fig. 1:
– a notion (signature and intended meaning) of refined state,
– a notion of states of interest and of correspondence between M-states S and M*-states S* of interest, i.e. the pairs of states in the runs one wants to relate through the refinement, usually including the correspondence of initial and (if there are any) of final states,
– a notion of abstract computation segments τ1, …, τm, where each τi represents a single M-step, and of corresponding refined computation segments σ1, …, σn of single M*-steps σj, which in given runs lead from corresponding states of interest to (usually the next) corresponding states of interest (the resulting diagrams are called (m,n)-diagrams and the refinements (m,n)-refinements),
– a notion of locations of interest and of corresponding locations, i.e. pairs of (possibly sets of) locations one wants to relate in corresponding states,
– a notion of equivalence ≡ of the data in the locations of interest; these local data equivalences usually accumulate to a notion of equivalence of corresponding states of interest.

[Fig. 1: The ASM refinement scheme: m steps τ1, …, τm of M between corresponding states of interest are related, via the equivalence ≡ on the locations of interest, to n steps σ1, …, σn of M* between the corresponding refined states of interest S* (an (m,n)-diagram).]
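The scheme above lends itself to a direct, if simplified, operational reading. The following Python sketch (not from the paper; all names and the example run are hypothetical) checks the listed ingredients on two given finite runs: states of interest are selected by predicates, corresponding states of interest are paired in order, and the equivalence of the data in corresponding locations of interest is checked pairwise. This is also, in essence, the comparator of Section 1, which watches locations of interest in corresponding states of ground model and code. The general ASM refinement notion quantifies over all runs and distinguishes further cases (e.g. partial versus total correctness), which this sketch deliberately ignores.

```python
# Sketch of checking an (m,n)-refinement on two given finite runs (illustrative).
# A run is a list of states; states of interest are selected by a predicate;
# corresponding states of interest are paired in order; the check succeeds if
# the data in corresponding locations of interest are equivalent in every pair.

from typing import Any, Callable, Dict, List, Tuple

State = Dict[str, Any]  # abstract view of a state: locations of interest -> values


def states_of_interest(run: List[State], interesting: Callable[[State], bool]) -> List[State]:
    return [s for s in run if interesting(s)]


def refinement_holds(run_m: List[State],
                     run_m_star: List[State],
                     interesting_m: Callable[[State], bool],
                     interesting_m_star: Callable[[State], bool],
                     corresponding_locations: List[Tuple[str, str]],
                     equivalent: Callable[[Any, Any], bool]) -> bool:
    """Check the local data equivalence in all pairs of corresponding states of interest."""
    s_m = states_of_interest(run_m, interesting_m)
    s_star = states_of_interest(run_m_star, interesting_m_star)
    if len(s_m) != len(s_star):   # both runs must pass through the same number
        return False              # of states of interest for a 1-1 pairing
    for abstract_state, refined_state in zip(s_m, s_star):
        for loc_m, loc_star in corresponding_locations:
            if not equivalent(abstract_state[loc_m], refined_state[loc_star]):
                return False
    return True


# Usage: a (1,2)-refinement where one abstract step "balance := balance - amount"
# is implemented by two refined steps (reserve, then commit); on the refined side
# only the initial and the committed states are states of interest.
run_m = [{"balance": 10, "done": False}, {"balance": 7, "done": True}]
run_m_star = [{"acct": 10, "phase": "init"},
              {"acct": 10, "phase": "reserved"},
              {"acct": 7, "phase": "committed"}]
ok = refinement_holds(run_m, run_m_star,
                      interesting_m=lambda s: True,
                      interesting_m_star=lambda s: s["phase"] in ("init", "committed"),
                      corresponding_locations=[("balance", "acct")],
                      equivalent=lambda a, b: a == b)
print(ok)  # -> True
```

The design choice here mirrors the text: what counts as a state of interest, which locations are watched, and which equivalence is used are parameters chosen per refinement step, not fixed once and for all.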
